11 research outputs found

    A Paradigm for Safe Adaptation of Collaborating Robots

    The dynamic forces that cross back and forth over the traditional boundaries of system development have led to the emergence of digital ecosystems. Within these, business gains are achieved through the development of intelligent control, which requires a continuous design-time and runtime co-engineering process that is endangered by malicious attacks. The possibility of inserting specially crafted faults capable of exploiting the nature of unknown, evolving intelligent behavior raises the need for malicious behavior detection at runtime. Adjusting to the needs and opportunities of fast AI development within digital ecosystems, in this paper we envision a novel method and framework for the runtime predictive evaluation of intelligent robots' behavior to assure cooperative safe adjustment.

    Towards Trusting the Ethical Evolution of Autonomous Dynamic Ecosystems

    Until recently, systems and networks have been designed to implement established actions within known contexts. However, gaining human trust in system behavior requires the development of artificial ethical agents that proactively act outside fixed context boundaries to mitigate dangerous situations in which other interacting entities find themselves. A proactive altruistic behavior oriented towards removing danger needs to rely on predictive awareness of a dangerous situation. Different from current approaches for designing cognitive architectures, in this paper we introduce a method that enables the creation of artificial altruistic trusted behavior, together with an architecture of the framework that enables its implementation.

    On Autonomous Dynamic Software Ecosystems

    Software ecosystems are considered the natural evolution of software product lines. A software ecosystem provides a (software) product within a particular business and organizational context that supports the exchange of activities and services within a domain. However, the increasing degree of autonomy demanded of software ecosystems is raising the system's responsibility towards end users, while existing software ecosystem architectures are not well prepared to deal with the dynamicity of context changes and the needs of autonomous behavior. In order to provide a transition toward an increased level of autonomy, in this article we introduce the notion of autonomous dynamic ecosystems as representative of those software ecosystems able to support the dynamic, smart, and autonomous features demanded by modern software systems. In this work, we further investigate and provide evidence from four industrial examples that have fully embodied the principles of autonomous dynamic ecosystems, and we characterize the main features and technology requirements of this new kind of ecosystem.

    Simulation methods and tools for collaborative embedded systems: with focus on the automotive smart ecosystems

    Embedded systems are increasingly equipped with open interfaces that enable communication and collaboration with other embedded systems. Collaborative embedded systems (CES) can be seen as an emerging new class of systems which, although individually designed and developed, can form collaborations at runtime. When embedded systems collaborate with each other, functions developed independently need to be integrated in order to evaluate the resulting system and discover unwanted side effects. Traditionally, early-stage validation and verification (V&V) of systems composed of collaborative subsystems is performed by function integration at design time. Simulation is used at this stage to verify the system's behaviour in a predefined set of test scenarios. In this paper, we provide a survey of simulation methods and tools for the V&V of CES. In the context of one use case from the automotive domain (vehicle platooning), we present solutions (methods and tools) and challenges brought by evaluating vehicle collaboration using simulation. (BMBF grant 01IS16043R, joint project CrESt: Model-Based Development of Collaborative Embedded Systems)

    EURECA: Epistemic uncertainty classification scheme for runtime information exchange in collaborative system groups

    Collaborative embedded systems (CES) typically operate in highly dynamic contexts that cannot be completely predicted at design time. These systems are subject to a wide range of uncertainties occurring at runtime, which can be distinguished as aleatory or epistemic. While aleatory uncertainty refers to stochasticity that is present in natural or physical processes and systems, epistemic uncertainty refers to the knowledge available to the system, for example in the form of an ontology, being insufficient for the functionalities that require certain knowledge. Even though both kinds of uncertainty are relevant for CES, epistemic uncertainties are especially important, since forming collaborative system groups requires a structured exchange of information. In the autonomous driving domain, for instance, the information exchange between the CES of different vehicles may relate to their own or environmental behavior, goals, or functionalities. To date, the systematic identification of epistemic uncertainties originating in this information exchange is insufficiently explored, as only some specialized classifications for uncertainties in the area of self-adaptive systems exist. This paper contributes an epistemic uncertainty classification scheme for runtime information exchange (EURECA) in collaborative system groups. By using this classification scheme, it is possible to identify the relevant epistemic sources of uncertainty for a CES during requirements engineering.
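    The core idea of classifying epistemic uncertainty in exchanged messages could be sketched as follows. Note that the category names and the toy classifier below are illustrative assumptions for this sketch, not the actual EURECA scheme defined in the paper.

    ```python
    from dataclasses import dataclass
    from enum import Enum


    class UncertaintySource(Enum):
        """Illustrative (hypothetical) sources of epistemic uncertainty
        in runtime information exchange."""
        MISSING_KNOWLEDGE = "receiver lacks a concept needed to interpret the message"
        AMBIGUOUS_CONTENT = "message maps to several concepts in the receiver's ontology"
        OUTDATED_KNOWLEDGE = "receiver's knowledge about the sender's state is stale"


    @dataclass
    class ExchangedMessage:
        sender: str
        topic: str


    def classify(message: ExchangedMessage, ontology_topics: set[str]):
        """Toy classifier: flag a message whose topic is absent from the
        receiver's ontology as a missing-knowledge uncertainty."""
        if message.topic not in ontology_topics:
            return UncertaintySource.MISSING_KNOWLEDGE
        return None
    ```

    For example, a platooning message about a manoeuvre the receiving vehicle has no concept for would be flagged as a missing-knowledge uncertainty during requirements analysis.
    
    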

    Predictive Runtime Simulation for Building Trust in Cooperative Autonomous Systems

    Future autonomous systems will also be cooperative systems. They will interact with each other, with traffic infrastructure, with cloud services, and with other systems. In such an open ecosystem, trust is of fundamental importance, because cooperation between systems is key to many innovative applications and services. Without an adequate notion of trust, as well as means to maintain and use it, the full potential of autonomous systems cannot be unlocked. In this paper, we discuss what constitutes trust in autonomous cooperative systems and sketch out a corresponding multifaceted notion of trust. We then discuss a predictive runtime simulation approach as a building block for trust and elaborate on means to secure this approach.
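    A minimal sketch of what predictive runtime simulation can mean in this setting: extrapolate each system's behavior over a short horizon and check that the predicted trajectories stay conflict-free. The linear-extrapolation model and the function names below are simplifying assumptions for illustration, not the method from the paper.

    ```python
    def predict_positions(pos, vel, horizon, dt):
        """Linearly extrapolate a 2D position over the prediction horizon.
        Returns one (x, y) sample per time step, including t = 0."""
        steps = int(horizon / dt)
        return [(pos[0] + vel[0] * dt * k, pos[1] + vel[1] * dt * k)
                for k in range(steps + 1)]


    def conflict_free(traj_a, traj_b, min_dist):
        """Check that two predicted trajectories never come closer
        than the required separation distance."""
        return all(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 >= min_dist
                   for (ax, ay), (bx, by) in zip(traj_a, traj_b))
    ```

    Two vehicles driving in parallel lanes would pass this check, while a predicted lane change cutting below the separation distance would fail it before the conflict occurs, which is the point of evaluating cooperation predictively at runtime.
    
    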

    Accelerated simulated fault injection testing

    Fault injection testing approaches assess the reliability of execution environments for critical software. They support the early testing of safety concepts that mitigate the impact of hardware failures on software behavior. The growing use of platform software for embedded systems raises the need to verify safety concepts that execute on top of operating systems and middleware platforms. Current fault injection techniques consider the resulting software stack as one black box and attempt to test the reaction of all components in the context of faults. This leads to very high software complexity and consequently requires a very high number of fault injection experiments. Testing the software components, such as control functions, operating systems, and middleware, individually would lead to a significant reduction in the number of experiments required. In this paper, we illustrate our novel approach to fault injection testing, which considers the components of a software stack individually, enables re-use of previously collected evidence, allows testing to focus on highly critical parts of the control software, and significantly lowers the number of experiments required.
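    The contrast with black-box testing can be illustrated with a minimal per-component fault injection sketch: a single fault model is applied to the input of one component in isolation, and only that component's safety mechanism is checked. The fault model, component, and function names below are hypothetical examples, not the approach's actual interfaces.

    ```python
    def bit_flip(bit):
        """Fault model: flip one bit of an integer input value."""
        def fault(value):
            return value ^ (1 << bit)
        return fault


    def run_with_fault(component, input_value, fault):
        """Execute one component with a single injected fault, instead of
        treating the whole software stack as a black box."""
        corrupted = fault(input_value)
        return component(corrupted)


    def saturating_controller(x):
        """Toy control function whose safety mechanism is range saturation."""
        return max(0, min(x, 100))
    ```

    Running the toy controller with a flipped high-order bit verifies, in one experiment, that its saturation mechanism bounds the corrupted value, without re-testing the operating system or middleware beneath it.
    
    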